Particle Gibbs with Ancestor Sampling for Probabilistic Programs

Authors

  • Jan-Willem van de Meent
  • Hongseok Yang
  • Frank Wood
Abstract

In this paper we develop an implementation of a recently proposed inference algorithm known as particle Gibbs with ancestor sampling (PGAS) [Lindsten et al., 2012] for higher-order probabilistic programming languages. Higher-order probabilistic languages such as Church [Goodman et al., 2008], Venture [Mansinghka et al., 2014] and Anglican [Wood et al., 2014] allow statistical models to be represented as programs. Evaluation of a program F, which we will here represent as a sequence of statements F = f_{1:N}, instantiates random values x for some subset of the expressions, which can be thought of as a sample from a prior p(x | F). A program F[x] in which all sampled values are substituted as constants is once again deterministic. This substitution also forms the basis for posterior inference, which samples the unconstrained variables from p(x | F[y]), where constants y are substituted for some of the random values in F[y]. In higher-order languages it is straightforward to define non-parametric models, which can instantiate arbitrary numbers of variables, or to specify model structures recursively in terms of a generative grammar. This offers greater flexibility relative to declarative systems such as Infer.NET [Minka et al., 2010] and STAN [STAN Development Team, 2014], which restrict the model definition syntax in order to omit higher-order functions and recursion. At the same time, this expressivity makes it difficult to characterize the support X_F = {x : p(x | F) > 0}, since neither the number of random values nor their conditional dependencies are necessarily fixed across all possible execution histories.
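The abstract's picture of a program F as a prior p(x | F), with substitution F[y] fixing some values as constants, can be made concrete with a small sketch. The following Python example is illustrative only and all names (`F`, `sample`, `trace`) are hypothetical; it is not the paper's implementation. It shows a generative program whose number of variables depends on an earlier random choice, and how substituting constants turns some sample statements into deterministic lookups:

```python
import random

# A minimal sketch (all names hypothetical) of a probabilistic program F,
# represented as a sequence of sampling statements. Evaluating F() draws
# values from the prior p(x | F).
def F(trace=None):
    # Each sample statement either draws a fresh random value, or reuses
    # a substituted constant from `trace`, mimicking the substitution F[y].
    def sample(name, draw):
        if trace is not None and name in trace:
            return trace[name]   # constant substituted for a random value
        return draw()            # random value instantiated from the prior

    x1 = sample("x1", lambda: random.gauss(0.0, 1.0))
    # The number of downstream variables depends on an earlier choice,
    # so the support X_F is not fixed across execution histories.
    n = sample("n", lambda: random.randint(1, 3))
    xs = [sample(f"x2_{i}", lambda: random.gauss(x1, 1.0)) for i in range(n)]
    return x1, n, xs

# Prior sample: every value is instantiated randomly.
prior_draw = F()

# F[y]: substituting constants y fixes those values; the remaining
# variables stay random, which is the starting point for inference.
x1, n, xs = F(trace={"x1": 0.5, "n": 2})
assert x1 == 0.5 and n == 2 and len(xs) == 2
```

Because `n` is itself random, two executions of `F` can instantiate different sets of variables, which is exactly why characterizing the support X_F is hard in higher-order languages.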


Similar works

Particle Gibbs with Ancestor Sampling for Probabilistic Programs

Particle Markov chain Monte Carlo techniques rank among current state-of-the-art methods for probabilistic program inference. A drawback of these techniques is that they rely on importance resampling, which results in degenerate particle trajectories and a low effective sample size for variables sampled early in a program. We here develop a formalism to adapt ancestor resampling, a technique th...


Particle gibbs with ancestor sampling

Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used ...


Sequential Monte Carlo as Approximate Sampling: bounds, adaptive resampling via ∞-ESS, and an application to Particle Gibbs

Sequential Monte Carlo (SMC) algorithms were originally designed for estimating intractable conditional expectations within state-space models, but are now routinely used to generate approximate samples in the context of general-purpose Bayesian inference. In particular, SMC algorithms are often used as subroutines within larger Monte Carlo schemes, and in this context, the demands placed on SM...


On particle Gibbs sampling

The particle Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm to sample from the full posterior distribution of a state-space model. It does so by executing Gibbs sampling steps on an extended target distribution defined on the space of the auxiliary variables generated by an interacting particle system. This paper makes the following contributions to the theoretical study of this a...


Infinite Hidden Semi-Markov Modulated Interaction Point Process

The correlation between events is ubiquitous and important for temporal events modelling. In many cases, the correlation exists between not only events’ emitted observations, but also their arrival times. State space models (e.g., hidden Markov model) and stochastic interaction point process models (e.g., Hawkes process) have been studied extensively yet separately for the two types of correlat...



Publication date: 2015